Aishwarya Naresh Reganti

These are the best posts from Aishwarya Naresh Reganti.

12 viral posts with 2,717 likes, 138 comments, and 182 shares.
8 image posts, 0 carousel posts, 0 video posts, 2 text posts.


Best Posts by Aishwarya Naresh Reganti on LinkedIn

🄲 I’ve said this before and I’ll say it again: The "Attention is All You Need" paper isn’t the best starting point to learn about attention: it’ll just overwhelm you.

Find some good visual resources and the concept will be way easier to understand.

šŸŽ These are the only two blogs I used initially and keep going back to for refreshing my memory on everything about attention, self-attention, transformers, and the whole foundation of today’s LLMs. I love how visual and interactive they are.

1. Sequence to Sequence (seq2seq) and Attention by Lena Voita: https://lnkd.in/g3vN8ZZ4
2. The Illustrated Transformer by Jay Alammar: https://lnkd.in/eJk-yamh (I'm sure anyone who's worked in NLP before the LLM frenzy has definitely come across this one)

😄 Read them in that order for better understanding.

Added a few images from their blogs to give you a glimpse, but I highly recommend checking them out for the full experience!
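If you want to see the core idea in code before diving into the visuals, here is scaled dot-product attention in a few lines of NumPy. This is a minimal self-contained sketch of the mechanism both blogs explain, not code taken from either of them:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention core: each query softly selects rows of V
    according to how well it matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # Softmax over keys (subtracting the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# Toy example: 3 tokens with 4-dim embeddings. In self-attention, Q, K, and V
# all come from the same token embeddings via learned projections (omitted here).
x = np.random.default_rng(0).normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

The blogs linked above add what this sketch leaves out: the learned projection matrices, multi-head splitting, and positional information.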
Post image by Aishwarya Naresh Reganti
👀 Really interesting paper that introduces the concept of "Agentic Context Engineering (ACE)"

...it basically treats an LLM’s context as an evolving playbook instead of a static input.

The authors say most current systems hit two issues:
⛳ Brevity bias: when prompts or contexts are optimized, they often get too short and generic, losing domain nuance or examples of failure.
⛳ Context collapse: when you keep rewriting or summarizing memory, you start dropping key details over time and the context becomes less useful.

ACE fixes that by turning context into a structured "playbook" that keeps growing. It stores learnings, strategies, failure cases, and patterns over time instead of replacing them.

It works through three roles:
⛳ Generator: handles the query using the current context, generating outputs, reasoning steps, and tool calls.
⛳ Reflector: looks at those runs and figures out what worked and what didn’t.
⛳ Curator: updates the context with those insights through small delta updates rather than full rewrites.

This grow-and-refine setup helps preserve detail and avoid context loss over time, while scaling to longer contexts without breaking. They test it both offline (improving prompts ahead of time) and online (updating context as the agent runs), and see good performance jumps across reasoning and agent tasks.

The way I see it, it’s basically a structured memory layer that learns to get better with experience. I’m not entirely sure how the three roles might interact in practice (there’s some risk of collusion, extra costs etc.) but it’s an interesting approach.

Link: https://lnkd.in/erEqqesT
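As a rough mental model of the loop described above, here is a toy sketch of the three roles. All the names and data structures below are my own simplification for illustration, not the paper's actual API; the stub functions stand in for LLM calls:

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """Structured context: bullets accumulate instead of being rewritten,
    which is what avoids brevity bias and context collapse."""
    bullets: list[str] = field(default_factory=list)

def generator(query: str, playbook: Playbook) -> str:
    # Stand-in for an LLM call that conditions on the current playbook.
    return f"answer({query}) using {len(playbook.bullets)} bullets"

def reflector(query: str, output: str) -> list[str]:
    # Stand-in for an LLM judging the run and extracting lessons.
    return [f"lesson from '{query}': {output[:20]}..."]

def curator(playbook: Playbook, insights: list[str]) -> None:
    # Delta update: append new insights, never rewrite the whole context.
    for ins in insights:
        if ins not in playbook.bullets:
            playbook.bullets.append(ins)

playbook = Playbook()
for q in ["query-1", "query-2"]:
    out = generator(q, playbook)
    curator(playbook, reflector(q, out))

print(len(playbook.bullets))  # 2: one lesson per run, both preserved
```

The point of the sketch is the data flow, not the stubs: context only ever grows through small appends, so earlier lessons survive later runs.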
Post image by Aishwarya Naresh Reganti
😲 Hugging Face just published an insanely comprehensive 200-page playbook on training SOTA LLMs.

To me, what stood out is how balanced it is about when it makes sense to train or fine-tune your own model and when it doesn’t, even though HF’s business technically benefits from people doing it.

It’s dense and will align best if you’ve trained models before or know your way around PyTorch (or similar), activations, loss functions, and attention.

But even if you don’t, the flowcharts are worth a look just to get a feel for how large-scale training actually flows.
😅 If you're tired of toy projects and demos and want a real look into how enterprise AI systems are built, this is for you.

You can now explore over 50 enterprise Agentic AI project demos across industries like healthcare, finance, education, real estate, retail, legal, and more.

Each demo includes architecture design, evaluations, guardrails, demo videos, and code.

We’ve brought together a showcase of all our Maven course capstone projects so you can see which enterprise problems can be solved in each vertical and how to design solutions with real intent.
Every project highlights:
⛳ Iterative system design
⛳ Thoughtful evaluation metrics and guardrails
⛳ Tradeoffs across cost, performance, effort, and latency
It’s a practical resource for anyone who wants to understand how enterprise-grade AI applications are actually built!
Link: https://lnkd.in/g3nXgkg7

You can also watch our current cohort’s demo day live (November 8th) using the link below:
https://lnkd.in/eFFcBJmw
Post image by Aishwarya Naresh Reganti
🫔 Congratulations Team India!

Thank you for giving every Indian woman and girl the courage to dream bigger, speak louder, and own her success unapologetically.

So proud of what you’ve achieved and what it represents for all of us! 🚀
Post image by Aishwarya Naresh Reganti
🄳 I keep coming back to the awesome-llm-apps repo (70k+ stars 🌟 already), it’s packed with 70+ AI projects spanning RAG, multi-agent systems, MCP, voice apps, and more.

Huge shoutout to Shubham Saboo for the consistent dedication in maintaining it!

It’s a great starting point for project ideas: plenty of small, beginner-friendly builds to get started with, and a good mix of advanced ones to level up when you’re ready.

Link: https://lnkd.in/gUgnFFjW
Post image by Aishwarya Naresh Reganti
🤌 This is a great one-hour crash course on OpenAI’s Agent Builder.

It’s hands down the best walkthrough I’ve found so far if you want to learn how to build LLM automations and workflows without writing code.

Covers the essentials, solid examples, and enough detail to actually get you building!

Link: https://lnkd.in/ew4J6UmD
🫨 This is such a comprehensive blog on how Claude skills work under the hood!

Really fascinating work by Han-chung Lee, and I’m super glad I stumbled upon it along with the other fun articles.

It walks through the entire lifecycle using the skill-creator and internal-comms skills as case studies, covering everything from file parsing and API request structures to Claude’s decision-making process.

Link: https://lnkd.in/ejFi_k-V
Post image by Aishwarya Naresh Reganti
😅 If you only need to remember 5 things about building a successful AI startup in the current chaos, let it be these:

⛳ The AI products that survive will be the ones that scale well: Most tools can do the job in small settings, but break when you throw real enterprise data at them. The hard part isn’t what AI can do, it’s how reliable it stays when you scale it.
⛳ Think of your system as something that learns, not something that just runs: The real moat is how well your AI absorbs company context over time, not how well it prompts or queries.
⛳ Evals are useful, but don’t make them your religion: Sometimes you need hard metrics, sometimes you need intuition. Both matter.
⛳ Stop selling million-dollar pilots: Start with a 30-day trial inside the customer’s cloud, focus on 10–20 real users, price it like a contractor, and scale only when you see accuracy and adoption.
⛳ The next few years will be messy: Systems will start working, but unevenly. The winners will be the ones who embrace that change and design for it!

These were the takeaways straight from our AMA with Tanmai Gopal from PromptQL this week in our Maven course! I had way too many takeaways to count, but these stood out the most.

Always great hearing from people who are actually riding the wave and doing the work!

More on Tanmai's startup PromptQL and what they do: https://promptql.io/
🎊 RAG isn’t dead. In fact, it’s one of the fastest-growing applications in the enterprise. Check out this curated list of 100+ top research papers on RAG published from March 2023 to today.

🔉 Over the past 2 years, the release of new LLMs and their increasing application across various fields has also spurred a surge in research on RAG approaches. A ton of advanced methods have been proposed to boost RAG's efficiency in step with the wider adoption of LLMs.

💡 I have compiled a selection of the most popular papers on RAG starting from March 2023 to the present, categorized as follows:

⛳ RAG Survey
📖 Comprehensive overview of existing methods in RAG.

⛳ RAG Enhancement (Advanced Techniques)
📖 Proposals for improving the efficiency and effectiveness of the RAG pipeline.

⛳ Retrieval Improvement
📖 Techniques focused on enhancing the retrieval component of RAG.

⛳ Comparison Papers
📖 Papers comparing RAG with other methods or approaches.

⛳ Domain-Specific RAG
📖 Adaptation of RAG techniques for specific domains or applications.

⛳ RAG Evaluation
📖 Assessment of the performance and effectiveness of RAG models.

⛳ RAG Embeddings
📖 Methods for developing better embedding techniques optimized for RAG or retrieval in RAG.

⛳ Input Processing for RAG
📖 Techniques for preprocessing input data to optimize the performance and effectiveness of RAG models.
Depending on the use-case, you can explore relevant papers to address various challenges and improve RAG. Happy Learning!

Link to the list: https://lnkd.in/ekN-2P9d
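For readers new to the area, the retrieve-then-generate loop that all of these papers refine can be sketched in a few lines. This is a deliberately minimal illustration: word-overlap scoring stands in for the embedding search a real system would use, and the final LLM call is replaced by prompt construction:

```python
from collections import Counter

# Tiny in-memory "corpus"; a real system would index documents with embeddings.
docs = [
    "RAG augments an LLM prompt with retrieved documents.",
    "Transformers use attention over token embeddings.",
    "Evaluation of RAG measures faithfulness and relevance.",
]

def score(query: str, doc: str) -> int:
    # Word-overlap similarity: how many query words appear in the doc.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    # Return the top-k most similar documents.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Augment the question with retrieved context before it reaches the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How is RAG evaluated?")
print(prompt)
```

Nearly every category in the list above targets one of these three functions: better retrieval, better input processing before `build_prompt`, or better evaluation of the final answer.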
Post image by Aishwarya Naresh Reganti
😁 If there’s one report worth reading to get a 10,000-foot view of AI progress in 2025 across multiple dimensions, it’s this one.

My favorite, Air Street Capital's "State of AI Report 2025", is out, and it covers several key dimensions, including a new large-scale AI usage survey section:

Copied from their website:
⛳ Research: Technology breakthroughs and their capabilities
⛳ Industry: Areas of commercial application and business impact
⛳ Politics: Regulation, economic implications, and the evolving geopolitics of AI
⛳ Safety: Identifying and mitigating catastrophic risks from highly capable future systems
⛳ Survey: The largest open-access survey of 1,200 AI practitioners and their usage patterns (you can even participate)
⛳ Predictions: Insights into what’s likely in the next 12 months, plus a review of last year’s forecasts to keep them accountable

It’s more useful for leaders than practitioners this time, IMO: there’s no mention of context engineering, evals, or anything hands-on (though there is a section on vibe-coding).

Still, a good skim, especially if you pull it up in NotebookLM.

Link to the full report (300 slides): https://www.stateof.ai/
👀 Salesforce just released a nice paper and repo on building enterprise deep research agents, definitely worth a look.

From their GitHub:

"Enterprise Deep Research (EDR) is a multi-agent system that integrates:
⛳ A Master Planning Agent for adaptive query decomposition
⛳ Four specialized search agents (General, Academic, GitHub, and LinkedIn)
⛳ An extensible MCP-based tool ecosystem supporting NL2SQL, file analysis, and enterprise workflows
⛳ A Visualization Agent for data-driven insights
⛳ A Reflection mechanism that detects knowledge gaps and updates research direction with optional human-in-the-loop steering guidance
⛳ Real-time steering commands for continuous research refinement."

Really interesting direction for enterprise-grade autonomous research systems.

Link: https://lnkd.in/e3x5Qzze
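The planner/searcher split can be sketched roughly like this. The agent names mirror the post, but the keyword routing is my invented stand-in for the LLM-driven query decomposition EDR actually uses:

```python
# Stub search agents; each would wrap a real search backend in practice.
def general_search(q: str) -> str: return f"web results for {q}"
def academic_search(q: str) -> str: return f"papers for {q}"
def github_search(q: str) -> str: return f"repos for {q}"
def linkedin_search(q: str) -> str: return f"profiles for {q}"

AGENTS = {
    "general": general_search,
    "academic": academic_search,
    "github": github_search,
    "linkedin": linkedin_search,
}

def master_planner(query: str) -> list[tuple[str, str]]:
    """Decompose a query into (agent, sub-query) pairs.
    A real planner would use an LLM; keyword routing stands in here."""
    subtasks = []
    if "paper" in query.lower():
        subtasks.append(("academic", query))
    if "repo" in query.lower() or "code" in query.lower():
        subtasks.append(("github", query))
    if not subtasks:
        subtasks.append(("general", query))
    return subtasks

# Fan the sub-queries out to the matching specialized agents.
results = [AGENTS[agent](q) for agent, q in master_planner("deep research papers and code")]
print(results)
```

The pieces this sketch omits (the MCP tool ecosystem, visualization, and the reflection loop that re-plans when gaps are detected) are exactly what the paper adds on top of this basic dispatch pattern.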
Post image by Aishwarya Naresh Reganti
